Minimally invasive surgery is highly operator-dependent and involves lengthy procedure times, causing fatigue and risk to patients. To mitigate these risks, real-time systems can help surgeons navigate and track tools by providing a clear understanding of the scene and avoiding misestimations during operation. Although several efforts have been made in this direction, the lack of diverse datasets, together with the highly dynamic nature of surgical scenes and their variability across patients, remain significant obstacles to achieving robust systems. In this work, we present a systematic review of recent machine learning-based approaches to surgical tool localization, segmentation, tracking, and 3D scene perception. Furthermore, we identify the current gaps and directions of these methods and provide the rationale behind their clinical integration.
6th Generation (6G) industrial wireless subnetworks are expected to replace wired connectivity for control operation in robots and production modules. Interference management techniques such as centralized power control can improve spectral efficiency in dense deployments of such subnetworks. However, existing solutions for centralized power control may require full channel state information (CSI) of all the desired and interfering links, which may be cumbersome and time-consuming to obtain in dense deployments. This paper presents a novel solution for centralized power control for industrial subnetworks based on Graph Neural Networks (GNNs). The proposed method requires only the subnetwork positioning information, usually known at the central controller, and the knowledge of the desired link channel gain during the execution phase. Simulation results show that our solution achieves spectral efficiency similar to that of benchmark schemes requiring full CSI during runtime operation. We also verify robustness to changes in deployment density and environment characteristics relative to the training phase.
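To make the idea concrete, here is a minimal PyTorch sketch of a position-based GNN power controller, under our own assumptions (the class name, layer sizes, and the distance-threshold interference graph are illustrative, not the paper's design):

```python
# Minimal sketch (not the paper's code): a GNN mapping subnetwork positions
# to transmit-power levels. Node features are 2D positions; edges connect
# subnetworks closer than a radius; message passing yields powers in [0, 1].
import torch
import torch.nn as nn

class PowerControlGNN(nn.Module):
    def __init__(self, in_dim=2, hidden=64, rounds=3):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.msg = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU())
            for _ in range(rounds))
        self.readout = nn.Linear(hidden, 1)

    def forward(self, pos, adj):
        # pos: (N, 2) subnetwork positions; adj: (N, N) 0/1 interference graph
        h = torch.relu(self.encode(pos))
        for layer in self.msg:
            agg = adj @ h / adj.sum(-1, keepdim=True).clamp(min=1)  # mean over neighbours
            h = h + layer(torch.cat([h, agg], dim=-1))              # residual update
        return torch.sigmoid(self.readout(h)).squeeze(-1)           # power in [0, 1]

# Toy usage: 10 subnetworks in a 50 m x 50 m factory, edges within 20 m.
pos = torch.rand(10, 2) * 50
dist = torch.cdist(pos, pos)
adj = ((dist < 20) & (dist > 0)).float()
power = PowerControlGNN()(pos, adj)  # one power level per subnetwork
```

Training such a model would presumably score the predicted powers against a spectral-efficiency objective computed from full CSI available offline, so that only positions (and the desired link gain) are needed at execution time.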
Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although more efficient in terms of quality and computation cost, sparse models remain data-hungry and costly to train from scratch in the large-scale regime. In this work, we propose sparse upcycling -- a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models significantly outperform their dense counterparts on SuperGLUE and ImageNet, respectively, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
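The core mechanism is simple enough to sketch. Below is a hedged PyTorch illustration of the upcycling step as we understand it (class and variable names are ours, and top-1 routing is a simplification): every expert in the new Mixture-of-Experts layer starts as a copy of the dense checkpoint's feed-forward block, and only the router is freshly initialized.

```python
# Hedged sketch of the upcycling step (names are ours, not the paper's):
# experts are copies of the pretrained dense FFN; only the router is new.
import copy
import torch
import torch.nn as nn

class UpcycledMoE(nn.Module):
    def __init__(self, dense_ffn: nn.Module, d_model: int, num_experts: int = 8):
        super().__init__()
        # Each expert starts as an exact copy of the pretrained dense FFN,
        # so the sunk pretraining cost is reused rather than discarded.
        self.experts = nn.ModuleList(
            copy.deepcopy(dense_ffn) for _ in range(num_experts))
        self.router = nn.Linear(d_model, num_experts)  # the only fresh weights

    def forward(self, x):  # x: (tokens, d_model); top-1 routing for simplicity
        scores = self.router(x).softmax(dim=-1)
        top_p, top_i = scores.max(dim=-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_i == e
            if mask.any():
                out[mask] = top_p[mask].unsqueeze(1) * expert(x[mask])
        return out

# Usage: pull the FFN out of a pretrained dense block and wrap it.
dense_ffn = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
moe = UpcycledMoE(dense_ffn, d_model=512)
y = moe(torch.randn(16, 512))
```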
Most benchmarks for studying surgical interventions focus on a specific challenge instead of leveraging the intrinsic complementarity among different tasks. In this work, we present a new experimental framework towards holistic surgical scene understanding. First, we introduce the Phase, Step, Instrument, and Atomic Visual Action recognition (PSI-AVA) Dataset. PSI-AVA includes annotations for both long-term (Phase and Step recognition) and short-term reasoning (Instrument detection and novel Atomic Action recognition) in robot-assisted radical prostatectomy videos. Second, we present Transformers for Action, Phase, Instrument, and steps Recognition (TAPIR) as a strong baseline for surgical scene understanding. TAPIR leverages our dataset's multi-level annotations as it benefits from the learned representation on the instrument detection task to improve its classification capacity. Our experimental results in both PSI-AVA and other publicly available databases demonstrate the adequacy of our framework to spur future research on holistic surgical scene understanding.
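As a speculative sketch only (the head sizes and feature shapes below are placeholders, not PSI-AVA's actual label counts, and this is not the released TAPIR code), a transformer head that consumes per-frame instrument-detector features and emits multi-level predictions might look like:

```python
# Speculative sketch of the TAPIR idea as we read it: per-frame features
# from an instrument detector feed a small transformer that emits long-term
# (phase/step) and short-term (per-frame action) predictions.
import torch
import torch.nn as nn

class TapirLikeHead(nn.Module):
    def __init__(self, feat_dim=256, n_phases=11, n_steps=20, n_actions=16):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.phase = nn.Linear(feat_dim, n_phases)
        self.step = nn.Linear(feat_dim, n_steps)
        self.action = nn.Linear(feat_dim, n_actions)

    def forward(self, det_feats):            # (B, T, feat_dim) detector features
        h = self.encoder(det_feats)
        clip = h.mean(dim=1)                  # pooled clip-level representation
        return self.phase(clip), self.step(clip), self.action(h)  # actions per frame

phase, step, action = TapirLikeHead()(torch.randn(2, 16, 256))
```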
Modern deep neural networks tend to be evaluated on static test sets. One shortcoming of this is that such networks cannot easily be evaluated for robustness to specific scene variations: for example, it is hard to study their robustness to variations of object scale, object pose, scene lighting, and 3D occlusions, mainly because collecting real datasets with fine-grained naturalistic variations at sufficient scale can be extremely time-consuming and expensive. In this work, we present Counterfactual Simulation Testing, a framework for studying the robustness of neural networks to such naturalistic variations by building realistic synthetic scenes in which we can ask counterfactual questions of the models, such as "Would your classification still be correct if the object were viewed from the top?" or "Would your classification still be correct if the object were partially occluded by another object?". Our method allows a fair comparison of the robustness of recently released, state-of-the-art Convolutional Neural Networks and Vision Transformers with respect to these naturalistic variations. We find evidence that ConvNext is more robust to pose and scale variations than Swin, that ConvNext generalizes better to our simulated domain, and that Swin handles partial occlusion better than ConvNext. We also find that robustness for all networks improves with network scale and with data scale and variety. We release the Naturalistic Variation Object Dataset (NVD), a large simulated dataset of 272k images of everyday objects with naturalistic variations such as object pose, scale, viewpoint, lighting and occlusions. Project page: https://counterfactualsimulation.github.io
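The evaluation loop behind such counterfactual questions can be sketched in a few lines; here `render` stands in for the synthetic-scene generator, and all names are our illustration rather than the released framework:

```python
# Illustrative sketch (our simplification): render the same object under
# controlled variations of one factor and check whether the classifier's
# prediction survives each intervention.
import torch

@torch.no_grad()
def counterfactual_accuracy(model, render, scene, factor, values, label):
    """Fraction of interventions on `factor` under which the model stays correct.

    `render(scene, **{factor: v})` is a placeholder for the scene generator
    and is assumed to return an image tensor of shape (3, H, W).
    """
    model.eval()
    correct = 0
    for v in values:
        image = render(scene, **{factor: v})   # e.g. factor="pose", v=45.0
        pred = model(image.unsqueeze(0)).argmax(dim=-1).item()
        correct += int(pred == label)
    return correct / len(values)

# e.g. counterfactual_accuracy(net, simulator.render, scene,
#                              factor="occlusion", values=[0.0, 0.25, 0.5], label=3)
```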
Domain shift is a well-known problem in the medical imaging community. In particular, for endoscopic image analysis, where the data can come from different modalities, the performance of deep learning (DL) methods is adversely affected: methods developed on one modality cannot be used on another. However, in real clinical settings, endoscopists switch between modalities for better mucosal visualisation. In this paper, we explore domain generalisation techniques to enable DL methods to be used in such scenarios. To this end, we propose to use superpixels generated with Simple Linear Iterative Clustering (SLIC), in a method we refer to as "SUPRA" for SUPeRpixel Augmented method. SUPRA first generates a preliminary segmentation mask using our new loss, "SLICLoss", which encourages both an accurate and a color-consistent segmentation. We demonstrate that SLICLoss, when combined with Binary Cross Entropy loss (BCE), can improve the model's generalisability on data that presents significant domain shift. We validate this novel compound loss on a vanilla U-Net using the EndoUDA dataset, which contains images of Barrett's esophagus and polyps from two modalities. We show that our method yields an improvement of nearly 25% on the target domain set compared to the baseline.
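A hedged sketch of what such a compound loss could look like, using scikit-image's SLIC (the internals of `slic_consistency` below are our guess at the color-consistency idea, not the authors' exact SLICLoss):

```python
# Sketch of a BCE + superpixel-consistency objective: the extra term
# penalises predictions that vary inside a single SLIC superpixel.
import torch
import torch.nn.functional as F
from skimage.segmentation import slic

def slic_consistency(pred, segments):
    # pred: (H, W) sigmoid probabilities; segments: (H, W) SLIC labels
    labels = torch.unique(segments)
    loss = 0.0
    for s in labels:
        region = pred[segments == s]
        loss = loss + ((region - region.mean()) ** 2).mean()  # intra-superpixel variance
    return loss / len(labels)

def compound_loss(logits, target, image_np, weight=0.1):
    # superpixels are computed on the RGB input, not on the prediction
    segments = torch.as_tensor(slic(image_np, n_segments=200, compactness=10))
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return bce + weight * slic_consistency(torch.sigmoid(logits), segments)
```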
Spectral methods provide consistent estimators for community detection in dense graphs. However, their performance deteriorates as the graphs become sparser. In this work we consider a random graph model that can produce graphs at different levels of sparsity, and we show that graph neural networks can outperform spectral methods on sparse graphs. We illustrate the results with numerical examples in both synthetic and real graphs.
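For intuition, the sparse regime is easy to reproduce: below is a toy two-block stochastic block model at tunable sparsity with the classic spectral estimator that thresholds the second eigenvector of the adjacency matrix (our own parameter choices, not the paper's exact model):

```python
# Toy illustration: community detection on a sparse SBM with the classic
# adjacency-spectral estimator.
import numpy as np
import networkx as nx

def sbm_communities(n=1000, a=5.0, b=1.0):
    # within/between-block edge probabilities a/n, b/n: small a, b => sparse graph
    sizes = [n // 2, n // 2]
    p = [[a / n, b / n], [b / n, a / n]]
    g = nx.stochastic_block_model(sizes, p, seed=0)
    truth = np.array([0] * sizes[0] + [1] * sizes[1])
    return g, truth

def spectral_estimate(g):
    A = nx.to_numpy_array(g)
    vals, vecs = np.linalg.eigh(A)
    return (vecs[:, -2] > 0).astype(int)  # sign of the second-largest eigenvector

g, truth = sbm_communities()
pred = spectral_estimate(g)
overlap = max(np.mean(pred == truth), np.mean(pred != truth))  # handle label switching
print(f"spectral overlap at this sparsity: {overlap:.2f}")
```

As the connection rates a and b shrink, the overlap of the spectral estimate degrades toward chance; this is the regime where, per the abstract, graph neural networks retain an advantage.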
This contribution presents a deep learning method for extracting and fusing information on kidney stone fragments acquired from different viewpoints of the endoscope. Surface and section fragment images are used jointly during the training of the classifier to improve the discriminative power of the features by adding attention layers at the end of each convolutional block. This approach is specifically designed to mimic the morpho-constitutional analysis performed ex vivo by biologists, who visually identify kidney stones by inspecting both views. The addition of attention mechanisms to the backbone improved the results of single-view extraction backbones by 4% on average. Moreover, in comparison to the state of the art, the fusion of the deep features improved the overall kidney stone classification accuracy by up to 11%.
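A rough PyTorch sketch of the described design (all layer names and sizes are ours; the paper's backbone and attention details are not reproduced): each conv block ends in a lightweight channel-attention gate, and the deep features of the surface and section views are fused for classification.

```python
# Sketch: shared conv backbone with per-block channel attention, two-view fusion.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):  # squeeze-and-excitation-style gate
    def __init__(self, ch, r=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch // r), nn.ReLU(),
            nn.Linear(ch // r, ch), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)[:, :, None, None]  # reweight channels

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, stride=2, padding=1),
        nn.BatchNorm2d(cout), nn.ReLU(),
        ChannelAttention(cout))  # attention at the end of each conv block

class TwoViewClassifier(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.backbone = nn.Sequential(block(3, 32), block(32, 64), block(64, 128),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(2 * 128, n_classes)

    def forward(self, surface, section):
        f = torch.cat([self.backbone(surface), self.backbone(section)], dim=-1)
        return self.head(f)  # classify on fused surface + section features

logits = TwoViewClassifier()(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```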
Large text-to-image models have achieved a remarkable leap in the evolution of AI, enabling high-quality and diverse synthesis of images from a given text prompt. However, these models lack the ability to mimic the appearance of subjects in a given reference set and to synthesize novel renditions of them in different contexts. In this work, we present a new approach for "personalizing" text-to-image diffusion models (specializing them to users' needs). Given just a few images of a subject as input, we fine-tune a pretrained text-to-image model (Imagen, although our method is not limited to a specific model) such that it learns to bind a unique identifier with that specific subject. Once the subject is embedded in the output domain of the model, the unique identifier can be used to synthesize fully novel photorealistic images of the subject contextualized in different scenes. By leveraging the semantic prior embedded in the model together with a new autogenous class-specific prior preservation loss, our technique enables synthesizing the subject in diverse scenes, poses, views, and lighting conditions that do not appear in the reference images. We apply our technique to several previously unassailable tasks, including subject recontextualization, text-guided view synthesis, appearance modification, and artistic rendering (all while preserving the subject's key features). Project page: https://dreambooth.github.io/
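At the level described in the abstract, the fine-tuning objective can be sketched as two denoising losses; the snippet below is our paraphrase (`unet` and the batch layout are placeholders, not any specific library's API): the usual loss on the few subject images captioned with the unique identifier, plus the prior-preservation term on images the frozen model generates for the bare class prompt.

```python
# Hedged sketch of the combined fine-tuning objective as described:
# subject denoising loss + class-prior preservation denoising loss.
import torch.nn.functional as F

def dreambooth_step(unet, subject_batch, prior_batch, lam=1.0):
    # each batch: (noisy_latents, timesteps, text_emb, target_noise)
    def denoise_loss(batch):
        noisy, t, text, target = batch
        return F.mse_loss(unet(noisy, t, text), target)

    # "a [V] dog" few-shot subject images  +  "a dog" class-prior images
    return denoise_loss(subject_batch) + lam * denoise_loss(prior_batch)
```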
Surgical tool detection in minimally invasive surgery is an essential component of computer-assisted interventions. Current approaches are mostly based on supervised methods, which require large, fully labeled datasets to train supervised models and suffer from pseudo-label bias because of class imbalance issues. However, large image datasets with bounding box annotations are often scarcely available. Semi-supervised learning (SSL) has recently emerged as a means of training large models using only a modest amount of annotated data; besides reducing annotation cost, SSL has also shown promise in producing models that are more robust and generalizable. Therefore, in this paper we introduce a semi-supervised learning (SSL) framework for the surgical tool detection paradigm, which aims to mitigate training data scarcity and data imbalance through a knowledge distillation approach. In the proposed work, we train a model on labeled data which initializes teacher-student joint learning, where the student is trained on teacher-generated pseudo-labels from unlabeled data. We propose a multi-class distance with a margin-based classification loss function in the region-of-interest head of the detector to effectively segregate foreground classes from background regions. Our results on the M2CAI16-Tool-locations dataset indicate the superiority of our approach across different supervised data settings (1%, 2%, 5%, and 10% of annotated data), where our model achieves overall improvements of 8%, 12%, and 27% in mAP (on 1% labeled data) over state-of-the-art SSL methods and the fully supervised baseline. The code is available at https://github.com/mansoor-at/semi-supervise-surgical-tool-det
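As a loose sketch of the margin idea in the RoI head (our formulation; the paper's exact multi-class distance loss is not reproduced), a hinge term can push each foreground proposal's true-class logit above its background logit by a margin, on top of standard cross-entropy:

```python
# Sketch: cross-entropy plus a foreground/background margin hinge for RoIs.
import torch
import torch.nn.functional as F

def margin_roi_loss(logits, labels, margin=0.5):
    # logits: (num_rois, num_classes + 1), class 0 = background
    # labels: (num_rois,) pseudo- or ground-truth class indices
    ce = F.cross_entropy(logits, labels)
    fg = labels > 0
    if fg.any():
        fg_logits = logits[fg]                                     # foreground proposals
        true_logit = fg_logits.gather(1, labels[fg].unsqueeze(1)).squeeze(1)
        bg_logit = fg_logits[:, 0]
        # hinge: the true-class logit must beat background by `margin`
        ce = ce + F.relu(margin - (true_logit - bg_logit)).mean()
    return ce

# toy check with 8 proposals and 4 tool classes + background
loss = margin_roi_loss(torch.randn(8, 5), torch.randint(0, 5, (8,)))
```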